
    Clustering in Block Markov Chains

    This paper considers cluster detection in Block Markov Chains (BMCs). These Markov chains are characterized by a block structure in their transition matrix. More precisely, the $n$ possible states are divided into a finite number of $K$ groups or clusters, such that states in the same cluster exhibit the same transition rates to other states. One observes a trajectory of the Markov chain, and the objective is to recover, from this observation only, the (initially unknown) clusters. In this paper we devise a clustering procedure that accurately, efficiently, and provably detects the clusters. We first derive a fundamental information-theoretic lower bound on the detection error rate satisfied by any clustering algorithm. This bound identifies the parameters of the BMC, and the trajectory lengths, for which it is possible to accurately detect the clusters. We next develop two clustering algorithms that can together accurately recover the cluster structure from the shortest possible trajectories, whenever the parameters allow detection. These algorithms thus reach the fundamental detectability limit, and are optimal in that sense.
    Comment: 73 pages, 18 plots, second revision
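
    A minimal spectral sketch of this setting (illustrative only, not the paper's two algorithms; the function and parameter names are assumptions): estimate the transition-count matrix from the observed trajectory, take a rank-$K$ truncated SVD, and run K-means on the resulting state embeddings.

        import numpy as np
        from scipy.sparse.linalg import svds
        from sklearn.cluster import KMeans

        def cluster_bmc_trajectory(trajectory, n_states, n_clusters):
            # Empirical transition counts: N[i, j] = #{t : X_t = i, X_{t+1} = j}.
            N = np.zeros((n_states, n_states))
            for i, j in zip(trajectory[:-1], trajectory[1:]):
                N[i, j] += 1.0
            # Rank-K truncated SVD of the count matrix.
            U, S, Vt = svds(N, k=n_clusters)
            # Embed each state by its scaled left and right singular rows.
            embedding = np.hstack([U * S, Vt.T * S])
            # K-means on the embeddings gives the cluster estimate.
            return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedding)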

    Wireless network control of interacting Rydberg atoms

    We identify a relation between the dynamics of ultracold Rydberg gases in which atoms experience a strong dipole blockade and spontaneous emission, and a stochastic process that models certain wireless random-access networks. We then transfer insights and techniques initially developed for these wireless networks to the realm of Rydberg gases, and explain how the Rydberg gas can be driven into crystal formations using our understanding of wireless networks. Finally, we propose a method to determine Rabi frequencies (laser intensities) such that particles in the Rydberg gas are excited with specified target excitation probabilities, providing control over mixed-state populations.
    Comment: 6 pages, 7 figures; includes corrections and improvements from the peer-review process
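
    The shared structure can be made concrete with a toy simulation (a sketch under strong simplifying assumptions; the model and all names are illustrative, not the paper's): a site may switch on only if none of its neighbours is on, which plays the role of both the dipole blockade and the carrier-sensing constraint in random-access networks.

        import numpy as np

        def simulate_blockade_dynamics(adj, up_rate, down_rate, n_steps, seed=0):
            # adj: 0/1 adjacency matrix of the blockade / interference graph.
            # up_rate stands in for the (Rabi-driven) excitation rate,
            # down_rate for spontaneous emission / end of transmission.
            rng = np.random.default_rng(seed)
            n = adj.shape[0]
            state = np.zeros(n, dtype=bool)
            occupancy = np.zeros(n)
            for _ in range(n_steps):
                i = rng.integers(n)  # pick a site uniformly (discretised dynamics)
                if state[i]:
                    if rng.random() < down_rate:
                        state[i] = False
                elif not state[adj[i].astype(bool)].any():
                    if rng.random() < up_rate:
                        state[i] = True
                occupancy += state
            return occupancy / n_steps  # empirical excitation probabilities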

    Almost Sure Convergence of Dropout Algorithms for Neural Networks

    We investigate the convergence and convergence rate of stochastic training algorithms for Neural Networks (NNs) that, over the years, have spawned from Dropout (Hinton et al., 2012). Modeling the possibility that neurons in the brain may fail to fire, dropout algorithms consist in practice of multiplying the weight matrices of a NN component-wise by independently drawn random matrices with $\{0,1\}$-valued entries during each iteration of the Feedforward-Backpropagation algorithm. This paper presents a probability-theoretical proof that for any NN topology and differentiable, polynomially bounded activation functions, if we project the NN's weights onto a compact set and use a dropout algorithm, then the weights converge to a unique stationary set of a projected system of Ordinary Differential Equations (ODEs). We also establish an upper bound on the rate of convergence of Gradient Descent (GD) on the limiting ODEs of dropout algorithms for arborescences (a class of trees) of arbitrary depth and with linear activation functions.
    Comment: 20 pages, 2 figures
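
    As a concrete illustration, here is projected dropout SGD on a single linear layer (a minimal sketch, assuming squared loss and plain SGD; the paper's result covers general topologies and activations):

        import numpy as np

        def projected_dropout_sgd(X, y, lr=1e-2, keep_prob=0.8, radius=10.0,
                                  n_iters=1000, seed=0):
            rng = np.random.default_rng(seed)
            n, d = X.shape
            w = np.zeros(d)
            for _ in range(n_iters):
                # Component-wise {0,1}-valued dropout mask, drawn
                # independently at every iteration.
                mask = (rng.random(d) < keep_prob).astype(float)
                residual = X @ (mask * w) - y      # forward pass with masked weights
                grad = 2.0 / n * (X.T @ residual)  # gradient w.r.t. masked weights
                w = w - lr * mask * grad           # chain rule: only kept weights move
                norm = np.linalg.norm(w)
                if norm > radius:                  # projection onto the compact set
                    w *= radius / norm
            return w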

    Achievable Performance in Product-Form Networks

    We characterize the achievable range of performance measures in product-form networks where one or more system parameters can be freely set by a network operator. Given a product-form network and a set of configurable parameters, we identify which performance measures can be controlled and which target values can be attained. We also discuss an online optimization algorithm, which allows a network operator to set the system parameters so as to achieve target performance metrics. In some cases the algorithm can be implemented in a distributed fashion; we give several such examples. Finally, we give conditions that guarantee convergence of the algorithm, under the assumption that the target performance metrics are within the achievable range.
    Comment: 50th Annual Allerton Conference on Communication, Control and Computing, 2012
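
    A hedged sketch of such an online tuning loop (the queueing example and every name here are illustrative, not taken from the paper): a Robbins-Monro iteration that nudges a configurable parameter until a noisy performance measurement matches its target.

        import numpy as np

        def tune_parameter(measure, target, theta0, bounds, n_rounds=500):
            # measure(theta): noisy observation of the performance metric,
            # assumed increasing in theta on the feasible interval `bounds`.
            lo, hi = bounds
            theta = theta0
            for k in range(1, n_rounds + 1):
                observed = measure(theta)
                theta -= (observed - target) / k   # Robbins-Monro step size 1/k
                theta = min(max(theta, lo), hi)    # keep theta feasible
            return theta

        # Example: pick the arrival rate of an M/M/1 queue with service rate
        # mu = 3 so that the mean queue length lam / (mu - lam) hits 2
        # (the exact answer is lam = 2).
        rng = np.random.default_rng(0)
        mu = 3.0
        noisy_mean_queue = lambda lam: lam / (mu - lam) + 0.1 * rng.standard_normal()
        lam_star = tune_parameter(noisy_mean_queue, target=2.0, theta0=1.0,
                                  bounds=(0.1, 2.9))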

    Matrix concentration inequalities with dependent summands and sharp leading-order terms

    We establish sharp concentration inequalities for sums of dependent random matrices. Our results concern two models. First, a model where the summands are generated by a $\psi$-mixing Markov chain. Second, a model where the summands are expressed as deterministic matrices multiplied by scalar random variables. In both models, the leading-order term is provided by free probability theory. This leading-order term is often asymptotically sharp and, in particular, does not suffer from the logarithmic dimensional dependence which is present in previous results such as the matrix Khintchine inequality. A key challenge in the proof is that techniques based on classical cumulants, which can be used in a setting with independent summands, fail to produce efficient estimates in the Markovian model. Our approach is instead based on Boolean cumulants and a change-of-measure argument. We discuss applications concerning community detection in Markov chains, random matrices with heavy-tailed entries, and the analysis of random graphs with dependent edges.
    Comment: 69 pages, 4 figures
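
    The logarithmic gap is easy to see numerically in the second model with independent Gaussian coefficients (a sketch; scales and constants are indicative only, and this is not the paper's dependent setting): the observed norm of the matrix series typically sits well below the matrix Khintchine bound $\sigma \sqrt{2 \log d}$.

        import numpy as np

        rng = np.random.default_rng(0)
        d, m, trials = 100, 50, 200

        # Deterministic symmetric coefficient matrices A_1, ..., A_m.
        A = rng.standard_normal((m, d, d))
        A = (A + A.transpose(0, 2, 1)) / np.sqrt(2 * d)

        # Khintchine scale: sigma = || sum_i A_i^2 ||^{1/2} (spectral norm).
        sigma = np.linalg.norm(np.einsum('ijk,ikl->jl', A, A), 2) ** 0.5

        # Empirical mean of || sum_i g_i A_i || for i.i.d. standard Gaussian g_i.
        norms = [np.linalg.norm(np.tensordot(rng.standard_normal(m), A, axes=1), 2)
                 for _ in range(trials)]

        print("matrix Khintchine bound ~ sigma * sqrt(2 log d):",
              sigma * np.sqrt(2 * np.log(d)))
        print("observed mean norm:", np.mean(norms))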